CogLTX: Applying BERT to Long Texts

Neural Information Processing Systems

BERT is incapable of processing long texts due to its quadratically increasing memory and time consumption. The most natural ways to address this problem, such as slicing the text by a sliding window or simplifying transformers, suffer from insufficient long-range attention or require customized CUDA kernels. The limited text length of BERT is reminiscent of the limited capacity (5-9 chunks) of the working memory of humans - then how do human beings Cognize Long TeXts? Founded on the cognitive theory stemming from Baddeley, our CogLTX framework identifies key sentences by training a judge model, concatenates them for reasoning, and enables multi-step reasoning via rehearsal and decay. Since relevance annotations are usually unavailable, we propose to use treatment experiments to create supervision. As a general algorithm, CogLTX outperforms or achieves results comparable to SOTA models on NewsQA, HotpotQA, and multi-class and multi-label long-text classification tasks, with memory overhead independent of the text length.
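The key-sentence selection idea can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the greedy budget-filling loop, the toy keyword-overlap `judge`, and all variable names here are assumptions for the example (CogLTX actually trains the judge model and interleaves selection with reasoning via rehearsal and decay).

```python
def select_blocks(blocks, judge, budget=512):
    """Greedily keep the highest-scoring text blocks (per a relevance
    'judge') whose combined token count fits BERT's input budget.

    blocks: list of (token_count, text) pairs; judge: text -> float score.
    """
    scored = sorted(blocks, key=lambda b: judge(b[1]), reverse=True)
    kept, used = [], 0
    for n_tokens, text in scored:
        if used + n_tokens <= budget:
            kept.append(text)
            used += n_tokens
    return kept

# toy judge: score a block by keyword overlap with a question
question = {"who", "founded", "framework"}
judge = lambda t: len(question & set(t.lower().split()))
blocks = [(200, "The CogLTX framework was founded on cognitive theory"),
          (400, "Unrelated filler text about other topics"),
          (300, "It identifies key sentences by training a judge model")]
selected = select_blocks(blocks, judge)  # keeps the two blocks that fit the budget
```

The concatenation of `selected` is what would then be fed to BERT, so memory stays bounded by `budget` rather than by the original text length.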


LoPT: Lossless Parallel Tokenization Acceleration for Long Context Inference of Large Language Model

Shao, Wei, Zheng, Lingchao, Wang, Pengyu, Zheng, Peizhen, Li, Jun, Fan, Yuwei

arXiv.org Artificial Intelligence

Long context inference scenarios have become increasingly important for large language models, yet they introduce significant computational latency. While prior research has optimized long-sequence inference through operators, model architectures, and system frameworks, tokenization remains an overlooked bottleneck. Existing parallel tokenization methods accelerate processing through text segmentation and multi-process tokenization, but they suffer from inconsistent results due to boundary artifacts that occur after merging. To address this, we propose LoPT, a novel Lossless Parallel Tokenization framework that ensures output identical to standard sequential tokenization. Our approach employs character-position-based matching and dynamic chunk length adjustment to align and merge tokenized segments accurately. Extensive experiments across diverse long-text datasets demonstrate that LoPT achieves significant speedup while guaranteeing lossless tokenization. We also provide theoretical proof of consistency and comprehensive analytical studies to validate the robustness of our method.
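The boundary problem and the position-based merge can be illustrated with a toy whitespace tokenizer. This is a sketch of the general idea only, under assumed simplifications: LoPT's actual method handles real subword tokenizers and dynamically adjusts chunk lengths, whereas here the chunk and overlap sizes are fixed and all function names are invented for the example.

```python
def tokenize(text, offset=0):
    """Toy whitespace tokenizer returning (token, start_char) pairs."""
    toks, i = [], 0
    for word in text.split():
        i = text.index(word, i)
        toks.append((word, offset + i))
        i += len(word)
    return toks

def merge(left, right):
    """Stitch two overlapping tokenizations at the first token both sides
    agree on (matched by character position and content), dropping the
    boundary artifacts on either side of the cut."""
    shared = set(right)
    cut = next(start for tok, start in left if (tok, start) in shared)
    return [t for t in left if t[1] < cut] + [t for t in right if t[1] >= cut]

def parallel_tokenize(text, chunk=30, overlap=15):
    """Tokenize overlapping chunks independently (each call could run in its
    own process), then merge losslessly: the result equals tokenizing the
    whole text sequentially."""
    result = None
    for s in range(0, len(text), chunk - overlap):
        piece = tokenize(text[s:s + chunk], offset=s)
        result = piece if result is None else merge(result, piece)
    return result
```

Without the overlap and the character-position match, a chunk boundary falling inside a word would produce truncated tokens that survive the merge; the position-based cut discards exactly those artifacts.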


LooGLE v2: Are LLMs Ready for Real World Long Dependency Challenges?

He, Ziyuan, Wang, Yuxuan, Li, Jiaqi, Liang, Kexin, Zhang, Muhan

arXiv.org Artificial Intelligence

Large language models (LLMs) have recently been equipped with increasingly extended context windows, yet their long-context understanding capabilities on long-dependency tasks remain fundamentally limited and underexplored. This gap is especially significant in many real-world long-context applications that have rarely been benchmarked. In this paper, we introduce LooGLE v2, a novel benchmark designed to evaluate LLMs' long-context ability in real-world applications and scenarios. Our benchmark consists of automatically collected real-world long texts, ranging from 16k to 2M tokens, encompassing the domains of law, finance, games, and code. Accordingly, we carefully design 10 types of domain-specific long-dependency tasks and generate 1,934 QA instances of varying diversity and complexity in a scalable data curation pipeline that can serve further practical needs. We conduct a comprehensive assessment of 6 locally deployed and 4 API-based LLMs. The evaluation results show that even the best-performing model achieves only a 59.2% overall score on our benchmark. Despite their extensive context windows, popular LLMs are only capable of understanding a much shorter length of context than they claim, revealing significant limitations in their ability to handle real-world tasks with long dependencies and highlighting substantial room for model improvement in practical long-context understanding.




LRSCLIP: A Vision-Language Foundation Model for Aligning Remote Sensing Image with Longer Text

Chen, Weizhi, Chen, Jingbo, Deng, Yupeng, Chen, Jiansheng, Feng, Yuman, Xi, Zhihao, Liu, Diyou, Li, Kai, Meng, Yu

arXiv.org Artificial Intelligence

This study addresses the technical bottlenecks in handling long text and the "hallucination" issue caused by insufficient short-text information in remote sensing vision-language foundation models (VLFM). We propose a novel vision-language foundation model, LRSCLIP, and a multimodal dataset, LRS2M. The main contributions are as follows: (1) By integrating multi-source remote sensing data and adopting a large language model labeling strategy, we construct the LRS2M dataset, which contains 2 million image-text pairs, providing both short and long texts for the first time, thus solving the problem of semantic granularity limitations in existing datasets; (2) The design of the LRSCLIP architecture based on Long-CLIP's KPS module, which extends CLIP's text processing capacity and achieves fine-grained cross-modal feature alignment through a dual-text loss weighting mechanism. Experimental results show that LRSCLIP improves retrieval accuracy by 10%-20% over the Long-CLIP baseline in the zero-shot long-text cross-modal retrieval task. For the zero-shot short-text cross-modal retrieval task, LRSCLIP achieves improvements over the current best model, GeoRSCLIP, with increases of 0.17%, 0.67%, and 0.92% in Text-to-Image R@1, Image-to-Text R@1, and mR on RSITMD, respectively, and 0.04%, 2.93%, and 1.28% on RSICD. This work provides a new benchmark model and data support for remote sensing multimodal learning. Recent years have seen significant progress in foundation models (FM) within the fields of computer vision (CV) and natural language processing (NLP) [1]-[8]. This research was funded by the National Key R&D Program of China under grant number 2021YFB3900504. Weizhi Chen and Kai Li are with the Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China, and also with the School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China.
Jingbo Chen, Yupeng Deng, Jiansheng Chen, Zhihao Xi, Diyou Liu, and Yu Meng are with the Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100101, China. Yuman Feng is with the School of Information Network Security, People's Public Security University of China, Beijing 100038, China. Unlike models designed for specific task objectives, VLFM learns joint representations of massive image-text pairs in upstream tasks and then transfers this knowledge to various downstream tasks, demonstrating exceptional performance. Several outstanding VLFM models have already emerged, such as CLIP [10], BLIP [11] [12], and MaskVLM [13]. Meanwhile, researchers have begun exploring the application potential of VLFM in the remote sensing domain. However, VLFM often faces issues related to the long-tail effect (where a small number of classes dominate while the rest have fewer samples), making direct application to remote sensing tasks challenging [14].
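The "dual-text loss weighting" can be pictured as a weighted sum of two CLIP-style contrastive losses, one over short captions and one over long captions. The sketch below is a minimal NumPy illustration under assumptions: the symmetric InfoNCE form, the temperature, and the 0.5/0.5 weights are illustrative defaults, not values taken from the LRSCLIP paper.

```python
import numpy as np

def clip_loss(img, txt, tau=0.07):
    """Symmetric InfoNCE (CLIP-style) loss over a batch of image/text
    embeddings; matching pairs share the same row index."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / tau

    def xent(l):  # mean cross-entropy with the diagonal as targets
        l = l - l.max(axis=1, keepdims=True)
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(np.diag(p)).mean()

    return 0.5 * (xent(logits) + xent(logits.T))

def dual_text_loss(img, short_txt, long_txt, w_short=0.5, w_long=0.5):
    """Weighted combination of short- and long-caption contrastive losses,
    in the spirit of a dual-text weighting mechanism (weights illustrative)."""
    return w_short * clip_loss(img, short_txt) + w_long * clip_loss(img, long_txt)
```

Training both terms jointly pushes the image encoder toward features that agree with the coarse short caption and the fine-grained long caption at the same time; perfectly aligned embeddings drive both terms toward zero.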


A LongFormer-Based Framework for Accurate and Efficient Medical Text Summarization

Sun, Dan, He, Jacky, Zhang, Hanlu, Qi, Zhen, Zheng, Hongye, Wang, Xiaokai

arXiv.org Artificial Intelligence

This paper proposes a medical text summarization method based on LongFormer, aimed at addressing the challenges faced by existing models when processing long medical texts. Traditional summarization methods are often limited by short effective context, leading to information loss or reduced summary quality on long texts. By introducing long-range self-attention, LongFormer effectively captures long-range dependencies in the text, retaining more key information and improving the accuracy and information retention of summaries. Experimental results show that the LongFormer-based model outperforms traditional models such as RNN, T5, and BERT in automatic evaluation metrics like ROUGE. It also receives high scores in expert evaluations, particularly excelling in information retention and grammatical accuracy. However, there is still room for improvement in conciseness and readability: some experts noted that the generated summaries contain redundant information, which affects conciseness. Future research will focus on further optimizing the model structure to enhance conciseness and fluency, achieving more efficient medical text summarization. As medical data continues to grow, automated summarization technology will play an increasingly important role in fields such as medical research, clinical decision support, and knowledge management.
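The long-range self-attention mentioned above is typically realized as a sparse attention pattern: each token attends to a local window, plus a few "global" positions that everything can see. The mask construction below is a generic sketch of that pattern, with window size and global indices chosen for illustration rather than taken from this paper.

```python
import numpy as np

def longformer_mask(seq_len, window=2, global_idx=(0,)):
    """Boolean attention mask in the Longformer style: each position attends
    to a local window of neighbours, while designated 'global' positions
    attend to, and are attended by, every position."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        mask[i, max(0, i - window):i + window + 1] = True
    for g in global_idx:
        mask[g, :] = True   # the global token sees everything
        mask[:, g] = True   # everything sees the global token
    return mask

m = longformer_mask(8, window=1)
# allowed entries grow O(seq_len * window), not O(seq_len ** 2)
```

This sparsity is what makes summarizing documents far beyond BERT's 512-token limit tractable: attention cost scales linearly with sequence length for a fixed window.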